How arcade games faked depth before 3D hardware.
Before polygon hardware, raster displays could draw pixels fast but couldn’t describe space. A raster screen is a grid of colored dots. It has no concept of near or far, no z-axis, no geometry. Drawing something that looks three-dimensional on a fundamentally flat display requires manufacturing the illusion from scratch, every frame, using tricks that have nothing to do with actual spatial representation.
Vector displays briefly offered a different path. Battlezone in 1980 drew genuine 3D by steering an electron beam directly rather than scanning line by line, allowing real perspective projection, real geometry, and real depth, albeit in green-on-black wireframe. It worked, and it was technically impressive, and it was a dead end. Vector hardware couldn’t do full color, couldn’t do filled surfaces, and couldn’t scale to the visual complexity players wanted as expectations rose. Rasterization was always going to win because it could do everything else. The problem was just that it couldn’t describe space, so developers had to fake it.
Sega spent roughly fifteen years solving this problem, and the way they approached it reveals something about how hard the problem actually was: they didn’t solve it from one direction. They split it in two and attacked from both ends simultaneously, building parallel hardware families that faked depth through completely different mechanisms.
The Sega System line of arcade hardware, including the System 16, System 18, and System 24, faked depth by manipulating the background. The hardware supported multiple tilemap layers that could scroll independently, combined with per-scanline register updates that let developers change scroll position, palette, and other parameters for every horizontal line as the raster beam drew down the screen. The result was line scrolling: roads that curved, horizons that moved, backgrounds that appeared to have genuine perspective because different horizontal bands of the image were offset by different amounts. OutRun is the purest expression of this. The road in OutRun isn’t geometry. It’s a flat tilemap being distorted line by line, with the distortion parameters calculated to approximate what a perspective projection of a curving road would look like.
Depth here came entirely from distortion. The world was still flat. It just didn’t look it.
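The per-scanline trick can be sketched in a few lines. This is a minimal illustration of the idea, not Sega’s actual register math: the screen dimensions, the quadratic bend, and the scaling constant are assumptions chosen so that a straight flat strip reads as a curve.

```python
# Sketch of line-scroll road rendering: one horizontal scroll offset per
# scanline, applied to a flat road image, bends it into an apparent curve.
# All constants here are illustrative, not taken from real hardware.

SCREEN_H = 224   # assumed visible scanlines on the raster
HORIZON = 100    # assumed scanline where the road vanishes

def road_scroll_offsets(curvature, camera_x):
    """Per-scanline horizontal offsets for the rows below the horizon.

    curvature: signed strength of the current bend (0 = straight road).
    camera_x:  player's lateral position on the road.
    """
    offsets = []
    for y in range(HORIZON, SCREEN_H):
        # depth runs from 1.0 at the horizon row (far) down toward 0.0
        # at the bottom of the screen (near).
        depth = (SCREEN_H - y) / (SCREEN_H - HORIZON)
        # A bend displaces distant rows more than near ones (quadratic
        # falloff bows the road edge), while the camera offset shifts
        # near rows more than far ones, since near rows fill the screen.
        offsets.append(curvature * depth ** 2 * 100 - camera_x * (1 - depth))
    return offsets
```

Feeding each of these offsets into the scroll register just before its scanline is drawn is what turns a straight painted strip into an apparently curving road.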
The Super Scaler line, including Space Harrier, After Burner, and Galaxy Force, attacked the problem from the opposite direction. Instead of bending the background, these boards weaponized the foreground. The Super Scaler hardware could render enormous numbers of sprites at multiple scales simultaneously, using dedicated scaling hardware that the CPU didn’t have to manage. Objects could rush toward the player and grow rapidly, shrink as they receded, and be sorted by size to approximate depth order. The Y Board used in Galaxy Force and Super Monaco GP pushed this further with a true rotating playfield and a three-layer sprite priority system. The “world” in these games stopped being a place with geography and became a stream of objects flying at the player, with scale standing in for distance.
Speed and scale sold depth better than scenery ever could, because the game didn’t give you time to notice the floor was flat.
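Scale standing in for distance reduces to two moves: magnification inversely proportional to distance, and painter’s-algorithm ordering by that same distance. A sketch under assumed names and a made-up focal length; the real boards did this per sprite in dedicated hardware.

```python
# Sketch of Super Scaler-style depth: each object carries only a distance z,
# sprite magnification is focal / z, and draw order is far-to-near so closer
# (bigger) sprites paint over distant ones. Numbers are illustrative.

FOCAL = 256.0  # assumed focal length, in pixels

def scale_for(z):
    """Sprite magnification for an object at distance z (z > 0)."""
    return FOCAL / z

def draw_order(objects):
    """Return (name, scale) pairs in painter's-algorithm order.

    objects: list of (name, z) pairs. Sorting far-to-near means the
    largest sprites are drawn last and so end up on top, which is the
    entire depth model: no geometry, just size and overlap.
    """
    far_to_near = sorted(objects, key=lambda o: o[1], reverse=True)
    return [(name, scale_for(z)) for name, z in far_to_near]
```

Calling `draw_order([("tree", 512.0), ("enemy", 128.0)])` draws the half-size tree first and the double-size enemy over it, which is all the eye needs to read the enemy as nearer.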
Both approaches were solving the same problem from opposite ends. System boards wove the illusion into the background layer, bending flat tilemaps into apparent space. Super Scaler boards manufactured depth through foreground velocity and scale, making the absence of real geometry irrelevant by overwhelming the player’s ability to examine it. Neither approach described space. Both faked the camera.
The ceiling for sprite-based depth came with System 32 in 1992. With Rad Mobile, Golden Axe: The Duel, and Stadium Cross, the System 32 combined techniques that had previously been mutually exclusive. Multiple high-resolution tilemap layers. Per-scanline line scroll across all of them. Large sprites with hardware scaling and rotation. The full toolkit in one platform, with enough processing headroom to use all of it simultaneously. At that point the limiting factor for sprite-based fake 3D was no longer hardware capability. It was art budget and tuning time. System 32 was the final form of the illusion, the point where every trick was available at once and the question became how well your team could combine them.
Then the ground shifted from outside the Sega family entirely. Namco’s System 21 in 1988 was drawing actual polygons: objects that existed in genuine three-dimensional space, transformed by real projection matrices, with real depth determining what was visible. This wasn’t a better illusion. It was a different model. The camera wasn’t being faked. It was real, in the sense that it had an actual position and orientation in a space where objects had actual coordinates. Namco followed with System 22, the hardware behind Ridge Racer and Ace Combat. The gap between what polygon hardware could do and what sprite scaling could approximate widened fast.
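The difference is visible in the math. With real geometry, depth isn’t a tuned parameter; it falls out of a perspective divide. A bare-bones sketch, with an assumed focal length and a 320×224 raster; real hardware used full transform matrices and polygon clipping.

```python
# Minimal sketch of what changed with polygon hardware: points have actual
# 3D coordinates and the camera has an actual position. Focal length and
# screen center are assumed values for a 320x224 raster.

def project(point, camera, focal=256.0, cx=160.0, cy=112.0):
    """Project a world-space (x, y, z) point through a camera at `camera`.

    Returns (screen_x, screen_y, z), or None if the point is behind
    the viewer. The divide by z IS the depth cue: the same lateral
    offset lands nearer the screen center as z grows.
    """
    x = point[0] - camera[0]
    y = point[1] - camera[1]
    z = point[2] - camera[2]
    if z <= 0:
        return None  # behind the camera, clipped
    return (cx + focal * x / z, cy - focal * y / z, z)
```

A point dead ahead of the camera lands at the screen center, and doubling an object’s distance halves its offset from that center, which is exactly the relationship the sprite scalers had been approximating by hand.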
Sega followed with the Model series: the Model 1 in 1992 running Virtua Fighter and Virtua Racing, the Model 2 in 1993, and the Model 3 in 1996 running Virtua Fighter 3 and Scud Race. The Model 1 wasn’t technically Sega’s first 3D hardware, but it was their first that resembled what we now recognize as a modern rendering pipeline. Geometry and transforms became first-class concepts. The background stopped being a manipulated tilemap and became a rendered scene. The camera stopped being a trick and became a thing with a position.
The fake-3D era looks like a detour from the perspective of where graphics went, but it wasn’t. The techniques developed across fifteen years of sprite scaling and line scrolling, including how to suggest depth through scale, how to use per-scanline distortion to imply perspective, and how to manage foreground and background layers to create the appearance of three-dimensional space, directly informed how early polygon hardware was used. The first polygon games look the way they do partly because the designers building them had spent careers manufacturing depth from flat hardware and knew exactly what the eye was willing to accept.