Rocco’s blog

Personal Research

BEEP

Personally, I am extremely interested in in-game sound design. I'm fascinated by how composers can create an atmosphere that never ends, unlike a film, which has fixed pacing and a set path. The documentary "Beep" is about the history of game sound, and this blog will go into that history up to 2010.

For the origins of game sound, we can go back to the penny arcades at the end of the 19th century. Winning a game was normally signalled by the ringing of a bell, to attract attention. Most machines were based on luck, which made them a form of gambling, and gambling was against the law in some places. To get around this, manufacturers had music play when a coin was inserted, so they could claim that players were paying to listen to the music. They then started advertising the machines as music machines, not games.

In the 1930s, arcades were introduced to pinball. These machines used bells and chimes, and the classic pinball sound is made by an electromagnet underneath a set of chimes, striking them. Even in the 30s, these machines were electromechanical.

In the 1980s, pinball machines were controlled by processors instead, and the new noises were designed in code. By the 1990s they were using sampled effects, full songs, and other noises to accompany the game.

Due to the invention of microprocessors, pinball now had to compete with video games. Video game sound design was originally very simple and was also written in code. One consideration sound designers in the 80s had to make was that arcades played loud music all the time, so most of the sound design was barely heard. This meant the sounds had to be short and usually high-pitched.

Originally there was no digital audio, only analogue audio. Often the audio was made by reverse-biasing a diode, applying current, and amplifying the resulting noise to create an explosion sound. Once microprocessors came around, sound could be generated in software.

Japan had a unique piece of custom hardware that used square waves. Square waves are extremely simple to produce digitally because the output is either on or off. This chip would be programmed and worked alongside the microprocessor.
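
As a rough illustration of how simple that is, here is a plain Python sketch (my own illustration, not actual chip code): a square wave just flips between two levels.

def square_wave(freq_hz, duration_s, sample_rate=44100, amplitude=0.5):
    # A square wave is "on" for half of each cycle and "off" for the other half
    samples = []
    period = sample_rate / freq_hz            # samples per cycle
    for n in range(int(duration_s * sample_rate)):
        high = (n % period) < (period / 2)    # first half high, second half low
        samples.append(amplitude if high else -amplitude)
    return samples

buffer = square_wave(440, 1.0)                # one second of concert A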

With this hardware, you had 3 channels for instruments and 1 noise channel, normally used for percussion. If a sound effect played, it took over one of these channels, so most of the time composers had to work with 2 channels. The new technology had its own strengths: it could play as fast as you wanted, go as high or as low as you wanted, and it could arpeggiate at full speed or hop between channels.
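
Since each channel plays only one note at a time, chords were often faked by arpeggiating: cycling through the chord tones so quickly that the ear hears a harmony. A hypothetical sketch of the idea in Python:

C_MAJOR = [261.63, 329.63, 392.00]            # C4, E4, G4 in Hz

def arpeggio(notes, steps, step_s=0.016):     # roughly one step per 60 Hz frame
    # Cycle through the chord tones so one channel implies a whole chord
    return [(notes[i % len(notes)], step_s) for i in range(steps)]

for freq, dur in arpeggio(C_MAJOR, 9):
    print(f"play {freq:.2f} Hz for {dur * 1000:.0f} ms")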

Sound and music had to be programmed. Composers would take notes and enter them into a hexadecimal system that started with 0 and ended with F: 0 1 2 3 4 5 6 7 8 9, A B C D E F.
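
In hexadecimal (base 16) the digits A to F stand for 10 to 15, so a composer's note table was really just numbers. A tiny Python sketch with made-up note data:

hex_notes = ["0C", "10", "13", "18"]          # hypothetical note bytes in hex
print([int(h, 16) for h in hex_notes])        # -> [12, 16, 19, 24]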

Because of the success of arcades, companies started making consoles for the home. Most of these had programmable sound chips like the arcades, but each console had a different sound chip, meaning every console had its own sound. The Atari's TIA chip had only 5-bit resolution: 32 notes that were out of tune, with the tuning changing depending on the instrument you chose. It derived these notes by dividing an oscillator's frequency by the number you put in, and it didn't have enough bits for fine-tuning.
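
To see why a divider produces out-of-tune notes, here is a sketch assuming a base clock of roughly 31,400 Hz (the real TIA's effective rate varies with the chosen waveform, so the numbers are illustrative only):

BASE_CLOCK = 31400.0                          # assumed oscillator rate

for divider in range(32):                     # 5 bits -> 32 possible values
    freq = BASE_CLOCK / (divider + 1)         # oscillator divided by your number
    print(f"{divider:2d} -> {freq:8.1f} Hz")

The steps between neighbouring dividers are enormous at the top of the range, so most musical pitches simply fall between two of the available notes.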

The Intellivision's sound chip was more advanced: it had 3 square-wave channels and a noise generator, along with a built-in single-note player.

After this came the NES, with another unique sound chip. It allowed variation in the types of waveforms used, added the ability to play short, low-resolution samples, and had 5 channels of instrumentation. There were limitations: the percussion always carried varying amounts of noise, and the bass waveform was predetermined.

One difference between game music and film music is that game music had to fit into a tiny file. To do this, the music was looped to make an infinite song, but you had to make it not sound like a loop. At this stage, game music could also be compiled onto an album.
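
One common trick (a hypothetical sketch, not any particular game's engine) is to play an intro once and then repeat only the body of the song, so the restart point is less obvious:

intro = ["bar 1", "bar 2"]                    # heard only once
loop_body = ["bar 3", "bar 4", "bar 5", "bar 6"]

def play_song(repeats=3):                     # a few repeats stand in for "forever"
    for bar in intro:
        print("play", bar)
    for _ in range(repeats):                  # real players loop indefinitely
        for bar in loop_body:
            print("play", bar)

play_song()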

The Commodore 64 used lots of looping sounds. It had a built-in synthesiser, which made it a good machine to compose music on. The chip had 3 voice channels of simple waveforms, with a filter that could be applied over them. Each machine's chipset differed slightly, meaning the audio on one could sound different from another. Another issue was RAM: only about 6KB was dedicated to music out of 64KB overall, which is why it was called the Commodore 64.
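
As a loose illustration of that architecture in Python (nothing like the real chip internals): three simple voices are mixed together, then one shared filter shapes the result.

import math

def voice(freq, n, rate=44100):
    # A crude square-ish voice: just the sign of a sine wave
    return 0.3 if math.sin(2 * math.pi * freq * n / rate) >= 0 else -0.3

def lowpass(samples, alpha=0.1):
    # A one-pole low-pass applied over the mixed voices
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

rate = 44100
mixed = [voice(262, n) + voice(330, n) + voice(392, n) for n in range(rate)]
filtered = lowpass(mixed)                     # one second of a filtered chord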

Most PCs of the time did not have an impressive sound system compared to the game consoles. Most songs had to fit in about 1KB so the rest of the game could fit as well; a typical song today is around 10MB (that's 10,000KB!).

IBM made the first PCs, which had a small internal speaker that beeped when the PC was turned on. The sound was limited because PCs didn't need more at the time. In 1985, companies started to create more complex sound systems for these PCs.

After this, a new form of synthesis was invented that would push sound design forward: frequency modulation (FM) synthesis, which uses one wave to modulate the frequency of another, creating complex tones. FM synthesis first made its way to the arcade in Marble Madness (1984).
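
In its simplest two-operator form, one wave (the modulator) wobbles the phase of another (the carrier). A minimal Python sketch of the maths, not the actual arcade hardware:

import math

def fm_sample(t, carrier_hz=440.0, modulator_hz=220.0, index=2.0):
    # The modulator is added to the carrier's phase; a higher
    # modulation index gives a brighter, more complex tone
    mod = index * math.sin(2 * math.pi * modulator_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod)

rate = 44100
buffer = [fm_sample(n / rate) for n in range(rate)]   # one second of FM tone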

FM would also make its way onto the SEGA Genesis. It needed a special driver and equipment to be used properly. There were 5 channels for FM, 3 channels for the PSG (Programmable Sound Generator), 1 for noise and 1 for PCM (Pulse Code Modulation). That made 10 channels in total, and each channel could only play one note at a time.

AdLib's FM card was a PCB (Printed Circuit Board) that slotted into a PC. When it was first released, it gained barely any traction outside the sound industry, but other companies took notice, such as Creative Labs (known as Creative Technology at the time). The AdLib FM synthesiser was the most advanced FM PCB on the market; Creative Labs had an inferior product they were selling, but eventually upgraded it and sold it as the Sound Blaster. Before these cards, PC games had essentially no sound beyond the internal speaker. The Sound Blaster had digital sound capabilities and a synthesiser. The Yamaha chip used in it was not designed for this purpose and received a lot of flak for the lack of sound quality it output.

More MIDI sound cards were released after this, and all had to support General MIDI, a standard that lets the same instrument data play back on any device. The MT-32, designed by Roland, was a step up from the Sound Blaster. It was originally designed for desktop music creation but could also be used in games; the only other comparable products on the market were the AdLib and the Sound Blaster.

As time progressed, sampling became more common, although the resolution of a sample was greatly degraded on the original hardware. Sound designers had to compose in a new way: before, they created "bleeps and bloops", but now there was the possibility of so much more, and they had to learn how to make new kinds of music.

The Amiga had 4 channels of sampled sound and allowed a new form of composition. The SNES had 8 channels of samples, but only a small amount of RAM to hold them.

iMUSE was created by LucasArts. It was interactive music software, non-linear by design, that allowed new and unique forms of composition. Monkey Island 2 was the first game to use it.
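
The core idea is that the music can branch with the game instead of playing straight through. One such trick, sketched hypothetically here (not the actual iMUSE engine), is to wait for a musical boundary before switching pieces:

def request_transition(current_beat, beats_per_bar, next_sequence):
    # Instead of cutting immediately, branch at the next bar line
    beats_to_wait = beats_per_bar - (current_beat % beats_per_bar)
    print(f"wait {beats_to_wait} beats, then switch to {next_sequence}")

request_transition(current_beat=6, beats_per_bar=4, next_sequence="map room theme")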

CD-ROMs started to be produced and brought higher audio quality. Red Book audio meant real instruments and fully mastered pieces could be recorded, which demanded higher production values for game music. Memory was still the hard part: you would always lose some quality and bitrate. It was an improvement over the older consoles, but still not perfect.

In the 90s, games became more advanced and switched from limited synthesis to CD audio. Yamaha, Roland and Korg released more synthesisers for this market. Thanks to sampling, you could now record any instrument.

Although the music moved forward, programmed synthesis was left behind. Game composers were now in competition with musicians, film composers and orchestral scores. Another part of this progression was that loops were used far less, thanks to the extra storage on CDs, though this opened up new issues of its own.

The PlayStation 2, GameCube and Xbox came out in the early 2000s. Xbox put a lot of resources into sound quality. When Microsoft bought Bungie (creators of the Halo series), the audio director, Martin O'Donnell, wanted to add surround sound to games to set them apart in the market, and Halo became the first game title with full 5.1 surround sound. Film theory says music should be timed to the visuals, but that's impossible in a game like Halo, where the player controls the pacing. To counter this, the music would stop playing in combat and come back out of combat.
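
In other words, the score follows game state rather than a fixed timeline. A hypothetical sketch of that pattern in Python (not Bungie's actual code):

class AdaptiveMusic:
    # The music reacts to game events instead of running on a fixed clock
    def __init__(self):
        self.in_combat = False

    def on_combat_start(self):
        self.in_combat = True
        print("fade music out")               # drop the score during the fight

    def on_combat_end(self):
        self.in_combat = False
        print("fade music back in")           # bring it back afterwards

music = AdaptiveMusic()
music.on_combat_start()
music.on_combat_end()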

In 2008, the App Store and Facebook were introduced, allowing a huge burst of games and audio design. Working within their constraints was a unique challenge. iPhones are technically stereo, but the speakers are so close together and so small that stereo panning is inaudible, making the output more like double mono.
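
If panning can't be heard, mixing for the device amounts to treating the two channels as one. A trivial fold-down sketch (my own illustration):

def to_mono(left, right):
    # Average the channels; on tiny, close-set speakers
    # the ear can't separate them anyway
    return [(l + r) / 2 for l, r in zip(left, right)]

print(to_mono([0.2, 0.4], [0.0, 0.4]))        # -> [0.1, 0.4]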

Voice-overs. Many years ago, game stories were very bare, and that allowed voice-overs (VOs) to be, by today's standards, lacking. Nowadays most games have extremely complex stories set in enveloping worlds; making everything feel human is essential, and bad VOs would break the immersion the world is trying to create.
