
Turning Money Into Light

How a combination of science, commerce and art has driven the technology behind filmmaking

by Steve Withers Aug 22, 2013



    The cinema, as we know it, is over a hundred years old and continues to be one of the dominant forms of entertainment throughout the world. The evolution of this medium has been a constant battle between science, commerce and art - a battle that continues to this very day. This is the story of the innovations that have pushed cinema forward, in a continuous effort to thrill and entertain us with exciting new technologies.
    A man walks around a corner...
    These days Louis Le Prince is generally recognised as the “Father of Cinematography”. If you’re wondering why you’ve never heard of him, that’s because he disappeared from a train in 1890, leaving others to claim the title. The unfortunate Frenchman literally vanished, along with his luggage, and was never heard from again - to this day his fate remains a mystery. However, before Le Prince’s own life became the plot of a movie - The Gentleman Vanishes perhaps? - he shot the first moving pictures on perforated paper film using a single lens camera. With titles like Man Walks Around A Corner and Traffic On Leeds Bridge, they might not be the greatest films ever made but they are among the most important. Le Prince’s opus of traffic crossing a bridge in Leeds was shot in October 1888, preceding the efforts of competitors like William Friese-Greene by a year. Whilst Friese-Greene may have been beaten to the punch by Louis Le Prince, the English inventor still had much to contribute to the nascent art of cinematography. It was Friese-Greene who experimented with celluloid as a medium for motion picture cameras and in 1889 he patented a single lens camera that used perforated celluloid film for the first time.

    In 1891 Thomas Edison patented his Kinetograph, which took a series of photographs on standard Eastman Kodak photographic emulsion coated onto a transparent celluloid strip 35mm wide. However it was Louis and Auguste Lumière who, in 1895, perfected their Cinématographe, an apparatus that could take, print and project film - thus creating modern cinema. In the early days of cinema, frame rates varied depending on whose equipment was being used. In order for the human eye to perceive motion the frame rate needs to be at least 10-12 frames a second; William Friese-Greene’s camera ran at 10 frames a second, whilst Louis Le Prince captured his images at 20 frames a second. In the period prior to the introduction of sound, frame rates varied from 18 to 24 frames a second, although Thomas Edison felt that 46 frames a second was the minimum to eliminate “eye strain”. As long as films were silent, there was no need for a standardised film speed but The Jazz Singer was about to change all that.
    You ain't heard nothin' yet!

    Sound in film wasn’t anything new by the time The Jazz Singer was released in 1927; there had been experiments with synchronised sound as far back as 1900. These early attempts used a sound-on-disc format but reliable synchronisation was difficult to achieve at the time and the amplification and recording quality was also inadequate. The difficulty in synchronising the sound on the disc with the images on the screen was addressed by using a mechanical interlock between a phonograph turntable and a specially modified film projector. The first feature-length film to be released with a sound sequence was D. W. Griffith’s Dream Street in 1921 but it was Don Juan in 1926 that was the first feature-length film to have a synchronised soundtrack for its entire running time. The soundtrack consisted of music and sound effects only, with no recorded dialogue, but the following year The Jazz Singer included Al Jolson singing and two dialogue sequences. The Jazz Singer used Vitaphone, a sound-on-disc system, but that would soon be replaced by the more robust and easier to implement sound-on-film technology, although sound-on-disc did make a comeback with DTS in the early 1990s.

    There had been experiments with recording sound onto film as far back as 1907, when Frenchman Eugène Lauste patented a method of transforming sound into light waves that are photographically recorded directly onto celluloid. However, it was an American inventor called Lee De Forest who successfully found a way of photographically recording the soundtrack on the side of the film strip to create a composite, or “married”, print. As long as proper synchronisation between sound and picture was achieved during recording, it could be guaranteed during playback. The first commercial screening of a series of shorts using sound-on-film took place in 1923 and by the 1930s the system had become the standard method of sound playback, and would remain so until the arrival of digital audio in the nineties. Thanks to the improvements in synchronisation and vastly superior recording and amplification technology, the “talkies” had arrived and movies would never be the same.
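    Optical soundtracks encode the audio photographically in one of two ways: as variations in the density of the exposed track, or as variations in the width of a clear area (the variable-area form that Dolby would later refine). The sketch below - a hypothetical illustration in Python/NumPy, not a model of any specific historical format - shows the variable-area idea: the width of the clear stripe tracks the signal's amplitude, so the total light reaching a photocell recreates the waveform.

```python
import numpy as np

# A sketch of a variable-area optical soundtrack: each row of the "film"
# corresponds to one audio sample, and the width of the clear stripe is
# proportional to the instantaneous amplitude. A photocell measuring the
# total light passing through each row recovers the waveform.

SAMPLES = 2400        # length of the track, in samples
TRACK_WIDTH = 200     # width of the soundtrack area, in arbitrary pixels

t = np.arange(SAMPLES)
signal = np.sin(2 * np.pi * t / 100)     # a simple test tone

track = np.zeros((SAMPLES, TRACK_WIDTH), dtype=np.uint8)
centre = TRACK_WIDTH // 2
half_widths = ((signal + 1) / 2 * centre).astype(int)
for row, hw in enumerate(half_widths):
    track[row, centre - hw:centre + hw] = 1   # 1 = clear film, passes light

# "Playback": summing the light per row gives the signal plus a fixed
# offset, which the cinema's amplifier would simply filter out.
recovered = track.sum(axis=1)
assert np.corrcoef(recovered, signal)[0, 1] > 0.99
```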

    The introduction of synchronised sound had another lasting impact, one that remains to this very day. Before sound, 18 frames a second was the supposed norm but cameras were often over or under cranked to achieve a certain effect and projectors were commonly run too fast to shorten running times. The use of a variable frame rate could no longer be tolerated once sound was introduced, because the human ear is far more sensitive to variations in playback speed, which shift audio pitch, than the eye is to variations in frame rate. Between 1927 and 1930 a standard frame rate was agreed that would allow for synchronised sound, retain audio quality, deliver relatively smooth images and keep costs to a minimum. That frame rate is 24 frames per second and it remains the standard to this day, despite the best efforts of Peter Jackson. To eliminate flicker during projection, a simple two-blade shutter was used in the projector to flash each frame twice, producing 48 images a second; later three-blade shutters increased the perceived rate to 72 images a second.
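    The relationship between frame rate and perceived flicker is simple multiplication, since each frame is flashed once per shutter blade. A quick illustrative calculation (in Python) shows why the three-blade shutter comfortably clears the roughly 50-60Hz flicker fusion threshold of human vision:

```python
# Flicker rate = frame rate x shutter blades: each frame is shown once
# per blade pass, so the screen flashes more often than the film
# actually advances through the gate.

FRAME_RATE = 24  # the standard agreed between 1927 and 1930

for blades in (1, 2, 3):
    flashes = FRAME_RATE * blades
    print(f"{blades}-blade shutter: {flashes} flashes per second")
# 1-blade: 24 (visible flicker), 2-blade: 48, 3-blade: 72
```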
    One is so starved for Technicolor up here.
    The first motion pictures were photographed on a simple silver halide photographic emulsion that produced a black and white image - that is, an image in shades of grey ranging from black to white. Early silent movies often sought to add colour by hand colouring, or tinting, individual frames but this early form of "colourisation" was time consuming and didn’t represent a true colour image. Between 1899 and 1935 there were dozens of natural colour systems developed, including one by William Friese-Greene called Biocolour. The first colour cinematography was by means of additive colour, with Edward Turner patenting such a system in England in 1899. This system was successfully tested in 1902, whilst a simplified version was developed by George Albert Smith and successfully commercialised in 1909 as Kinemacolor. These early systems used black and white film to photograph and project two or more component images through different filters. This process was practical because no special colour stock was necessary, but the approach was subject to fringing due to the separate images not lining up.

    The other approach was subtractive colour, which used duplitised film to record both red and green; by bleaching away the silver and replacing it with colour dye, a colour image was obtained. The most famous example of this is Kodak’s Kodachrome, which was first used in a narrative film in 1916. This led to the bipack colour system, which used two strips of film running through the camera, one recording red and one recording blue-green light. The black-and-white negatives were printed onto duplitised film and the images toned red and blue, effectively creating a subtractive colour print. Finally this evolved into a process developed by Technicolor which used a beam splitter in a specially modified camera to send red and green light waves to separate black-and-white film negatives. From these negatives, two prints were made on film stock with half the normal base thickness, which were toned accordingly: one red, the other green. They were then cemented together, base-to-base, into a single strip of film.

    Whilst Technicolor’s process proved popular, it was very expensive to implement and colour photography could cost up to three times as much as its black and white counterpart. By 1932, general colour photography had nearly been abandoned by the major studios, until Technicolor developed an improved process that recorded all three primary colours. Utilising a special dichroic beam splitter, equipped with two 45-degree prisms in the form of a cube, light from the lens was split into two paths to expose three black-and-white negatives - one each to record the densities for red, green and blue. When combined, these three primaries collectively rendered a wider spectrum of colour than previous technologies. Despite the popularity of Three-Strip Technicolor, the majority of films were still produced in black and white until the 1960s, thanks in part to the near monopoly that Technicolor held over the medium. In 1950 a Federal court ordered Technicolor to make its camera available to independent studios and filmmakers, although it was the invention of Eastmancolor that same year that signalled the beginning of the end for Three-Strip Technicolor.
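    What the beam splitter did optically is easy to mimic digitally: a colour image can be separated into three monochrome records, one per primary, and recombined without loss. The sketch below is an illustrative analogy in Python/NumPy (the real process involved a bipack of two films in one light path and dye-transfer printing); it simply shows that three black-and-white "negatives" carry all the information needed to reconstruct the full colour image:

```python
import numpy as np

# A digital analogue of Three-Strip Technicolor: split a colour image
# into three black-and-white "negatives" (one per primary), then
# recombine them. The real process used a beam-splitting prism and
# dye-transfer printing; this sketch only mimics the separation step.

rng = np.random.default_rng(0)
scene = rng.random((480, 640, 3))     # stand-in for the live scene

red_neg   = scene[:, :, 0]            # record of red densities
green_neg = scene[:, :, 1]            # record of green densities
blue_neg  = scene[:, :, 2]            # record of blue densities

recombined = np.stack([red_neg, green_neg, blue_neg], axis=-1)
assert np.allclose(recombined, scene)  # three records carry the full colour
```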

    Eastmancolor was Kodak's first economical, single-strip 35mm negative-positive process, incorporating all the colour records into one strip of film. This rendered Three-Strip colour photography relatively obsolete, although for the first few years of Eastmancolor’s existence, Technicolor continued to offer their Three-Strip process. Hollywood studios waited until an improved version of Eastmancolor was released in 1952 before fully embracing it, but once they did, colour films became the norm. The modern colour film is still based on this subtractive colour system, which filters colours from white light through dyed or colour-sensitive layers within a single strip of film. It was the popularity of television that saw Hollywood increase colour production in order to combat falling attendances, but this didn’t help Technicolor because Eastmancolor proved an essential component in the next stage of film’s evolution - the move to a wider screen.
    The widescreen wars.
    In the early days of cinema, filmmakers used the entire area of the film negative between the two rows of perforations, which equated to an aspect ratio of 1.33:1. This was fine during the silent era but after the introduction of sound-on-film, the optical soundtrack reduced the available negative area, creating an aspect ratio of about 1.19:1. This proved disorientating to audiences who were used to the 1.33:1 ratio, and so studios began to reduce the height of the image by decreasing the projector aperture in the cinema. This approach was somewhat confusing, with each studio (and even each cinema chain) using slightly different aspect ratios. As a result, in 1930 a set of standards was agreed for sound-on-film movies, including a frame rate of 24 frames per second and an aspect ratio of 1.33:1. In 1932 the Academy of Motion Picture Arts and Sciences (AMPAS) agreed on a new standard aspect ratio of 1.37:1, which has since been referred to as “Academy Ratio”. Virtually all films shot between 1932 and 1952 used the Academy ratio and, crucially, the first television systems also adopted a 1.33:1 ratio.

    As television increased in popularity, the studios began to look at ways of enticing audiences back by offering the kind of spectacle that television just couldn’t deliver. The first part of this counter attack was to increase the number of colour films being produced, thus providing an alternative to monochromatic TV broadcasts. The next stage was to make films wider, further differentiating cinema from television by using a different aspect ratio. The idea of widescreen filmmaking wasn’t new; Abel Gance had famously shot some of his 1927 epic Napoleon in Polyvision, which used three synchronised cameras and projectors to create an aspect ratio of 4:1. That was the only time Polyvision was used, but the technique formed the basis of Cinerama, which also used three interlocked 35mm cameras to create an aspect ratio of 2.59:1. This was then projected onto a huge and deeply curved screen using three synchronised projectors. The first Cinerama film - This Is Cinerama, released in 1952 - was only possible thanks to the introduction of better quality Eastmancolor film that same year, but it proved a hit with audiences.

    Whilst the effect of Cinerama certainly impressed audiences in 1952 and 1953, it had some obvious limitations, not least of which was filming with three interlocked cameras and then projecting with three synchronised projectors. It was difficult to match and blend the three images, the presentation had a very limited sweet spot and close-ups were impossible. However, the process had whetted the public’s appetite for widescreen filmmaking and thankfully a much simpler option was available - CinemaScope. This process used an anamorphic lens, an invention that had been around since the 1920s, to squeeze a widescreen image onto a single 35mm frame. When projected, the process was capable of an aspect ratio of 2.55:1, although this was reduced to 2.35:1 once the various soundtracks had been included. The first CinemaScope film was The Robe in 1953, originally planned to be shot in Three-Strip Technicolor but instead shot with Eastmancolor film stock and CinemaScope lenses. Although CinemaScope itself was abandoned after 1967 in favour of Panavision anamorphic lenses, the term ‘scope’ is still used today to refer to films shot in a 2.39:1 aspect ratio. Within months of the arrival of widescreen movies, all the major studios started matting their non-anamorphic films in the projector to wider ratios such as 1.66, 1.75 and 1.85. The latter ratio is, along with anamorphic 2.39:1, considered the standard for cinema projection today.
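    The arithmetic behind the anamorphic format is straightforward: the lens squeezes the scene horizontally onto the film and a matching projector lens stretches it back out, so the projected ratio is simply the on-film frame ratio multiplied by the squeeze factor. A worked sketch in Python, using approximate, illustrative frame dimensions:

```python
# An anamorphic lens squeezes the image horizontally by a fixed factor
# (2x for CinemaScope); a matching lens on the projector stretches it
# back out, so the projected aspect ratio is the ratio of the frame on
# film multiplied by the squeeze factor.

SQUEEZE = 2.0  # CinemaScope's horizontal compression

def projected_ratio(frame_width_mm: float, frame_height_mm: float) -> float:
    return (frame_width_mm / frame_height_mm) * SQUEEZE

# Approximate frame dimensions in mm (illustrative figures):
print(round(projected_ratio(23.8, 18.7), 2))   # full aperture: ~2.55:1
print(round(projected_ratio(21.9, 18.6), 2))   # with soundtrack: ~2.35:1
```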

    However, it wasn’t just the aspect ratio that was getting wider; the camera negative itself was also changing. Paramount had been the first to make changes to the negative, eschewing the move to CinemaScope in favour of their own VistaVision process. This ran the film sideways through the camera, creating a larger negative with an aspect ratio of 1.85:1 that could be projected onto a very big screen without excessive film grain. Others were looking to increase the size of the negative still further. Panavision's first system, developed in conjunction with MGM, used both an integrated anamorphic lens and a 65mm camera negative and was called MGM Camera 65. The first film shot and projected using MGM Camera 65 was Ben-Hur and one can only wonder at how that must have looked when projected in 70mm at an aspect ratio of 2.76:1. When the MGM Camera 65 production of Mutiny on the Bounty went wildly over budget, MGM were forced to sell their camera division to Panavision and the system became Ultra Panavision. However, there were also competing systems that didn’t use anamorphic lenses and simply took advantage of the larger camera negative; these included Todd-AO and Super Panavision, which both used a 65mm negative and an aspect ratio of 2.2:1. Whilst only a handful of full-length feature films have been shot in 65mm over the last forty years, the resurgence in popularity of IMAX - which is also a large camera negative format - shows that there is still an audience for big screen spectacle.
    Comin' at ya!
    Incredibly, there were early tests with 3D filmmaking as far back as 1890, when our old friend William Friese-Greene experimented with projecting two images side-by-side, and in 1900 Frederic Ives patented his stereo camera rig. The principle of 3D is fairly simple - if you can create a left and a right eye image, and then find a way of ensuring that each eye only sees the image intended for it, the brain will interpret this as a three dimensional image. The tricky part, especially in the early days, was interlocking the two cameras for filming and then synchronising the projectors during playback. Once you’d figured that out, you then had to find a way of ensuring that each eye only saw the image intended for it. The first confirmed 3D film projected for an audience was The Power of Love, which was shown in September 1922. This screening used dual-strip projection in the red/green anaglyph format and was the first film to use both of these technologies. It started the first 3D craze of the 1920s, with a number of films using the anaglyph system - the only boom in which this approach was used. Despite the enduring popular image of rows of cinema goers wearing red/blue anaglyph glasses, all the other 3D booms have used polarised lenses.
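    The anaglyph technique can be sketched in a few lines of code: each eye's image goes into a different colour channel, and the coloured filters in the glasses route each one to the correct eye. Below is a minimal illustration in Python/NumPy, using a crude synthetic stereo pair rather than real photography:

```python
import numpy as np

# The anaglyph principle: put the left-eye image in the red channel and
# the right-eye image in the green/blue channels; red and cyan filters
# in the glasses then route each image to its intended eye.

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """left, right: greyscale images as 2D float arrays of equal shape."""
    anaglyph = np.zeros(left.shape + (3,))
    anaglyph[:, :, 0] = left      # red channel: seen through the red filter
    anaglyph[:, :, 1] = right     # green channel: seen through the cyan filter
    anaglyph[:, :, 2] = right     # blue channel: also passes the cyan filter
    return anaglyph

# A crude stereo pair: the same random scene shifted a few pixels,
# mimicking the horizontal parallax between two camera positions.
rng = np.random.default_rng(1)
scene = rng.random((240, 320))
left, right = scene, np.roll(scene, 4, axis=1)
frame = make_anaglyph(left, right)
```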

    The first use of polarised lenses for 3D presentation was in the 1930s but it was in the 1950s that the second major 3D craze happened. By now 3D projection used the dual-strip format, with two synchronised projectors with a polarised filter over each lens and glasses with matching filters for the left and right eye. The period from 1952-54 is often referred to as the “golden era” of 3D filmmaking, with famous movies like House of Wax, Creature from the Black Lagoon and Dial M for Murder. However, the craze soon died out and in fact Dial M for Murder was seen primarily in 2D on its initial release, although it got a 3D re-release during the third 3D boom in the 1980s. That particular boom seemed to be driven less by an advance in technology and more by the release of the third film in a number of popular series, such as Jaws 3-D, Amityville 3-D, Friday the 13th - Part III and, er, Emmanuelle IV. The popularity of 3D seems to go in roughly 30-year cycles - the 1920s, 1950s and 1980s - so it should come as no surprise that the format is currently enjoying another period of success, kicked off in 2009 by the release of Avatar. The difference this time is that, thanks to the digital revolution, shooting and projecting 3D has become much easier, and far more native 3D films have been produced in the last five years than in all the other booms combined.
    Making films sound better.
    In the early days of the ‘talkies’ the sound was obviously mono but as the recording and amplification technologies improved, filmmakers quickly wanted to add more channels. Walt Disney was a big advocate of multi-channel sound and, flush with the success of Snow White, he tasked his engineers with developing a stereophonic sound system for his 1940 animated film Fantasia. The film was released in selected venues in what was christened ‘Fantasound’, which used a multi-channel soundtrack recorded on a separate piece of film that was synchronised with the image itself. The result was the first stereophonic feature film but the format was short lived, partly due to the cost of installing all the equipment but also because of America’s entry into the Second World War. However, the development of multi-channel sound continued and in the fifties magnetic sound-on-film became popular because it offered better sound quality than optical and could include up to six tracks on the film. It was, however, more expensive than optical and less robust, and was largely limited to roadshow presentations and 70mm prints. There had also been experiments with optical stereo soundtracks but until the 1970s there was too much noise to make this type of approach viable.

    Then along came Dolby, who adapted their noise reduction technology to make stereo optical soundtracks a viable alternative to magnetic ones; in 1971 Stanley Kubrick’s A Clockwork Orange became the first film to use Dolby noise reduction throughout its production. In conjunction with Eastman Kodak and RCA, Dolby also developed their Stereo Variable Area (SVA) technology, an optical method that offered stereo sound by using two variable width lines in the space that was originally allocated for one. In 1974 Callan became the first film to use a Dolby encoded optical soundtrack, but Dolby soon realised that with two higher quality optical tracks on a piece of film, there was the opportunity to include additional channels using a process called matrixing. This allowed a centre channel and a mono surround channel to be encoded along with the front left and right channels - the era of mass market surround sound had arrived. The first film released in Dolby Stereo was Ken Russell’s Lisztomania in 1975, although it only used an LCR (Left-Centre-Right) encoded soundtrack. The first film to have a full LCRS (Left-Centre-Right-Surround) soundtrack was A Star is Born in 1976, although it was Star Wars the following year that really put Dolby Stereo on the map, making it the de facto film sound system for nearly twenty years.
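    The basic idea of matrixing can be sketched numerically: four channels are folded into the two optical tracks, with the centre carried as the signal common to both and the surround as the difference between them. The following is a heavily simplified illustration in Python/NumPy - a real Dolby Stereo encoder also applies 90-degree phase shifts to the surround and the decoder uses active steering, neither of which is modelled here:

```python
import numpy as np

# A simplified 4:2:4 matrix: fold L/C/R/S into two tracks (Lt/Rt) and
# recover four outputs. Real Dolby Stereo adds surround phase shifts
# and active steering; this only shows the sum/difference principle.

G = 1 / np.sqrt(2)  # -3dB, the standard matrix coefficient

def encode(L, C, R, S):
    Lt = L + G * C + G * S
    Rt = R + G * C - G * S
    return Lt, Rt

def decode(Lt, Rt):
    L, R = Lt, Rt
    C = G * (Lt + Rt)   # centre: what the two tracks have in common
    S = G * (Lt - Rt)   # surround: what differs between them
    return L, C, R, S

# A centre-only signal ends up equally in both tracks and is steered
# back to the centre output on decode:
Lt, Rt = encode(L=0.0, C=1.0, R=0.0, S=0.0)
print(decode(Lt, Rt))   # C recovered at full level; L/R carry -3dB crosstalk
```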

    The digital revolution in filmmaking started in the 1970s and picked up pace in the 1980s, but it was in the nineties that it really had an impact, and one of the first areas to feel it was sound. The first digital multi-channel surround sound format was Cinema Digital Sound (CDS), which replaced the analogue audio tracks on a film with a discrete 5.1 soundtrack encoded using 16-bit PCM audio. A number of films used CDS in the early nineties, most famously Terminator 2: Judgment Day, but the system had no analogue back-up, so if the digital soundtrack was damaged there was no audio at all. As a result it was superseded by Dolby Digital when it was launched in 1992, because Dolby’s format moved its digital information to another area (in between the film sprocket holes), preserving the optical tracks. The first film released with a Dolby Digital soundtrack was Batman Returns but the new format wouldn’t have the digital audio pie all to itself for long.

    In 1993 the release of Last Action Hero saw Sony launch SDDS (Sony Dynamic Digital Sound), whilst the Steven Spielberg blockbuster Jurassic Park helped launch DTS (Digital Theater Systems). The digital information for SDDS was recorded along both outer edges of the 35mm release print and supported up to eight channels, with five along the front. Conversely, DTS used a time code on the release print that synchronised with the soundtrack, which was encoded on a separate CD-ROM - taking us full circle and back to Vitaphone and The Jazz Singer. All of these formats could coexist on a single release print: the SDDS data along the outer edges, the Dolby Digital information between the sprocket holes, the analogue optical soundtrack as two variable-width lines inside the sprocket holes and the DTS time code as a dashed line running alongside the picture. Dolby Digital and DTS are still going strong, although DTS cinema sound is now called Datasat, but SDDS is generally regarded as a dying format.
    Into the digital realm.
    The move to digital filmmaking hasn’t just advanced 3D production; it has affected every aspect of modern cinema. From shooting to post-production and from soundtracks to projection, the digital revolution has fundamentally changed the movies. In fact film itself will soon go the way of the dinosaurs that Jurassic Park’s effects wizards so brilliantly brought to life on their computers, ushering in a new era of digital effects.

    The use of computer generated effects in movies goes back to Westworld in 1973, which included 2D computer animation to create the POV of Yul Brynner’s android gunslinger. Appropriately enough it was that film’s sequel, Futureworld, that included the first 3D computer animation, three years later. Whilst the effects in Star Wars were optical, the motion control camera that was used wouldn’t have been possible without the addition of a computer, and by 1978 ILM were experimenting with computer generated X-Wings. Although Disney heavily promoted the computer generated images in 1982’s Tron, much of it was in fact traditional hand-drawn cel animation. However, it was still a landmark film and that same year ILM produced the ‘Genesis Effect’ for Star Trek II, which was the first fully CG sequence in a theatrical feature film. By 1984, The Last Starfighter was using CG to create real world objects and the ‘stained glass knight’ in 1985’s Young Sherlock Holmes was the first photorealistic CG character. In 1988, the film Willow included the first morphing effect and the following year ILM created the water tentacle for James Cameron’s The Abyss. The success of the water tentacle gave Cameron the confidence to create the liquid metal villain for Terminator 2 and that in turn gave ILM the confidence to create the photo-realistic dinosaurs in Jurassic Park. That led to Toy Story in 1995 and the rest, as they say, is history.

    It wasn’t only sound and effects that went through a digital transformation in the 1980s and early nineties; the entire post production process was rapidly changing. Francis Coppola had advocated what he termed ‘electronic cinema’ back in 1982, when he shot One from the Heart using a central control area with video feeds from the set. This was a precursor to the ‘video villages’ found on many sets today and Coppola, along with his friend George Lucas, could see the potential of using video for storyboarding, animatics, rehearsals and even editing. The idea of converting film to digital files for non-linear editing was pioneered by Lucasfilm, who initially developed the EditDroid, an analogue system that used LaserDiscs. This technology was eventually sold to Avid, who developed the digital non-linear editing tools that are in common use today. Along with editing, compositing also transitioned to the digital realm, as did other traditional techniques such as matte painting, thanks again to the pioneering work at ILM. In fact the influence of George Lucas on digital cinema can’t be overstated; it was the computer graphics department at Lucasfilm that ultimately became Pixar and it was Lucas who pioneered digital projection in 1999 with test screenings of The Phantom Menace.

    By the beginning of the 21st century, as cinema itself passed its own centenary, just about the only element that remained analogue was the film in the camera. That strip of 35mm celluloid passing through the gate at 24 frames a second had remained largely unchanged since the late 1920s but even that was about to change. Naturally it was George Lucas who pushed the technology of digital film capture, experimenting back in 1997 with one scene in The Phantom Menace and shooting all of Attack of the Clones on digital cameras in 2000. A piece of 35mm film reacts to the light that hits it at a chemical level, giving it a unique look, whilst a digital camera uses a sensor composed of pixels, which is more precise. As the technology improves, with higher resolution, a wider colour space and better dynamic range, more and more filmmakers have embraced digital photography. The use of digital cameras also makes the entire process much easier, with immediate playback and a far greater ability to manipulate the image as a digital intermediate. The irony is that despite the innovations made in digital photography and the inevitable decline in the use of actual film, celluloid still influences the aesthetic choices of cinematographers to this day. Whilst higher frame rates are easily achieved on digital cameras, most filmmakers prefer the motion rendition of 24 frames a second, and despite the clean images produced by digital sensors, filmmakers often add artificial grain to achieve a film-like effect. The simple fact is that the more things change, the more they stay the same and, after all these years, filmmakers are still just turning money into light.

