The technical trajectory of security cameras was not an overnight success but a cross-disciplinary evolution spanning more than a century. Its roots trace back to the late 19th century and the first attempts to capture continuous moving images. In 1876, English inventor Wordsworth Donisthorpe patented the "kinesigraph," a moving-picture camera designed to take a series of photographs at set intervals to capture movement.1 By 1889, Donisthorpe and, separately, Louis Le Prince had further refined film cameras and projection technology; Le Prince even built a 16-lens camera, which, while more of an experimental tool at the time, laid the physical foundation for continuous monitoring of specific spaces.1
The first true Closed-Circuit Television (CCTV) system was born of military necessity during World War II. In 1942, German engineer Walter Bruch was tasked with designing and supervising a system to monitor A4 (V-2) rocket launches from a safe bunker.1 The core of this system was its "closed-circuit" nature: video signals were transmitted only to preset, non-public monitors. Imaging technology at the time relied entirely on bulky vacuum tubes and complex analog circuits, with no means of recording; security personnel had to watch the monitors in real time, because the information was lost forever once the image disappeared.2
In 1949, the American company Vericon launched the first commercial CCTV system, marking the technology's transition from military to commercial and civilian use.3 These early systems primarily used fixed black-and-white cameras connected by coaxial cable. Because vacuum tubes ran hot, drew heavy power, and required 110 V AC, installation was strictly constrained, often requiring the camera to sit within 6 feet of a power outlet.5 Optical performance was also extremely limited, with resolutions of only about 240 TV lines.
Before semiconductor imaging technology matured, camera pick-up tubes were the sole imaging core of security cameras. These devices were essentially cathode-ray tubes (CRTs) running in reverse. In the 1950s, Paul Weimer, Stanley Forgue, and Robert Goodrich at RCA developed the Vidicon, a storage-type camera tube that used a photosensitive semiconductor (initially antimony trisulfide) as its target.7
A camera tube works by focusing the scene through an optical lens onto the photosensitive target, which a low-velocity electron beam from an electron gun then scans. Where light strikes the target, local conductivity changes, causing the beam current to fluctuate and thereby converting light into a video signal.8 The Vidicon significantly reduced camera size and cost, making it the standard for non-broadcast surveillance.7
However, the Vidicon suffered from a fatal "burn-out" defect. If pointed at the sun, highly reflective surfaces, or bright light points for too long, the photosensitive target would suffer permanent physical damage, creating "blind spots".8 Additionally, Vidicons were susceptible to the "microphonic effect," where loud noises or explosions caused physical vibrations in the thin-film target, producing horizontal bars on the screen.8
To overcome the Vidicon's low sensitivity and severe "trailing" (comet tails), Philips introduced the Plumbicon in the 1960s. With a lead-oxide target, the Plumbicon offered a high signal-to-noise ratio and extremely low image lag.7 While successful in broadcasting, its high cost confined it to high-end security applications. Not until the late 1970s, with low-light tubes such as the Tivicon (a silicon-diode tube) and Panasonic's Newvicon, did vacuum-tube cameras meet the basic needs of nighttime monitoring.10
The table below summarizes the evolution of early vacuum tube security cameras:
| Technical Phase | Core Sensor | Representative Era | TV Lines | Key Features | Limitations |
| --- | --- | --- | --- | --- | --- |
| Initiation | Early photoelectric tubes | 1942 | 100-200 | Military use, real-time observation | Extremely bulky, no recording 4 |
| Commercialization | Vidicon | 1950s | 240 | Simple structure, cost reduction | Easy to burn out, low sensitivity 7 |
| Performance Boost | Plumbicon | 1960s | 400+ | High SNR, low lag | Very expensive 8 |
| Analog Peak | Newvicon/Saticon | 1970s | 480-700 | Early low-light capability | Still large, AC power dependent 10 |
1969 was a milestone in modern imaging history: Willard Boyle and George Smith at Bell Labs invented the Charge-Coupled Device (CCD), an achievement that later earned them the 2009 Nobel Prize in Physics.13 The CCD revolutionized security camera hardware, replacing fragile vacuum tubes with solid-state silicon chips.13
The working principle of a CCD can be compared to an array of buckets collecting rainwater. Each pixel (a potential well in the silicon) acts like a bucket collecting photons (raindrops). The photoelectric effect converts photons into photoelectrons, which are stored in the well. During readout, these charge packets are shifted row by row, relay-race fashion, to a readout amplifier and converted into voltage.13 The CCD's advantage lies in its high image uniformity and low pattern noise: because all pixels usually share one to four readout amplifiers, their response is consistent.13
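To make the relay-race analogy concrete, the following is a minimal Python sketch of serial CCD readout. The 4x4 array, Poisson photon counts, and single shared amplifier are illustrative assumptions, not any vendor's design; the point is the strictly sequential charge transfer.

```python
import numpy as np

def ccd_readout(frame: np.ndarray) -> np.ndarray:
    """Toy model of serial CCD readout: charge packets shift row by row
    into a horizontal register, then pixel by pixel through ONE shared
    output amplifier (the source of CCD uniformity and of its slowness)."""
    rows, cols = frame.shape
    output = np.zeros_like(frame, dtype=float)
    amplifier_gain = 2.0  # one amplifier -> identical gain for every pixel

    for r in range(rows):                  # vertical shift: one row per clock
        serial_register = frame[r].copy()  # row drops into the serial register
        for c in range(cols):              # horizontal shift: one pixel per clock
            output[r, c] = serial_register[c] * amplifier_gain
    return output

# A 4x4 "exposure": photoelectrons collected in each potential well
wells = np.random.poisson(lam=50, size=(4, 4))
print(ccd_readout(wells))
```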
Fairchild Semiconductor launched the world's first commercial CCD, the MV-100, in 1973, with a resolution of only 100x100 pixels.14 Though initially intended for industrial and military use, it paved the way for "pocket-sized" security cameras.16 Sony invested a staggering 20 billion yen in R&D throughout the 1970s, eventually commercializing the XC-1 color CCD camera in 1980.18 This move, considered a suicidal gamble at the time, established Sony as the dominant force in the global image sensor market for decades.19
During the CCD's reign in the 1980s and 1990s, internal camera electronics also underwent radical changes. Printed Circuit Board (PCB) technology moved from phenolic paper to fiberglass substrates, greatly enhancing thermal stability and signal integrity.6 In the 1970s, PCBs supported only single-sided wiring; by the 1980s, double-sided PCBs allowed more signal processing components (like early video processors) to be integrated into small camera housings.6 During this period, security systems used coaxial cables to transmit analog signals, with resolution reaching the physical limit of analog technology—approximately 700 TV lines (TVL).5
While CCD led in image quality for a long time, its complex manufacturing, high power consumption, and inability to integrate logic circuits limited further camera intelligence. In the mid-1990s, Complementary Metal-Oxide-Semiconductor Active Pixel Sensor (CMOS APS) technology began to mature.13
Unlike the "serial readout" of CCD, each pixel in a CMOS sensor has its own amplifier and readout circuit. This architecture provides multiple technical advantages:
High Integration: Image Signal Processors (ISP), Analog-to-Digital Converters (ADC), and timing control circuits can be integrated onto the same silicon die, forming a System-on-Chip (SoC).21
Ultra-High Speed: With thousands of readout channels, CMOS speeds can be 100x faster than CCD, enabling high-frame-rate monitoring (60fps or higher) and slow-motion playback.13
Power Control: CMOS consumes significant power only during pixel switching, drastically reducing heat—a critical factor for 24/7 security operations.13
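The arithmetic below illustrates, under purely hypothetical channel counts and clock rates, why parallel column readout translates into the frame-rate gap described above; no specific sensor's datasheet is implied.

```python
# Back-of-envelope frame-rate comparison (illustrative numbers only).
pixels = 3840 * 2160                       # one 4K frame, ~8.3 Mpixels

# CCD: every pixel funnels through a handful of output amplifiers.
ccd_channels, ccd_rate = 4, 10e6           # 4 taps at 10 Mpixel/s each
ccd_fps = ccd_channels * ccd_rate / pixels

# CMOS: one ADC per column reads out an entire row in parallel.
cmos_channels, cmos_rate = 3840, 0.1e6     # 3840 column ADCs at 0.1 Mpixel/s
cmos_fps = cmos_channels * cmos_rate / pixels

print(f"CCD : {ccd_fps:.1f} fps")          # ~4.8 fps
print(f"CMOS: {cmos_fps:.1f} fps")         # ~46 fps
```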
In 2007, CMOS reached market parity with CCD, and by 2019, as Back-Illuminated (BSI) designs became widespread, CMOS performance surpassed CCD outright.13 BSI reorders the sensor stack so that light reaches the photodiode before the wiring layer, drastically increasing Quantum Efficiency (QE) and laying the groundwork for "Starlight" surveillance.14
The table below compares CCD and CMOS in modern security applications:
| Parameter | CCD Sensor | CMOS Sensor (APS) | Impact on Trends |
| --- | --- | --- | --- |
| Readout Speed | 1-40 Mpixel/s | 100-400+ Mpixel/s | Enabled HD video streaming 13 |
| Read Noise | 5-10 electrons | 1-3 electrons | Improved low-light clarity 13 |
| Dynamic Range | High (full-frame) | Extremely high (HDR modes) | Facilitated WDR breakthroughs 15 |
| Cost | High (specialized lines) | Low (standard CMOS fabs) | Drove camera democratization 13 |
| Integration | Low (external chips) | High (single-chip SoC) | Led to Edge AI cameras 22 |
If the sensor is the "retina" of a camera, the lens is its "crystalline lens." In security, lenses must maintain resolving power across highly variable environments.
Early monitoring lenses were mostly spherical. By their physical nature, spherical lenses do not converge edge rays and central rays at the same point, causing spherical aberration and edge blur.26 To solve this, security lenses began adopting aspherical elements at scale. Although the underlying theory was proposed by Descartes in 1637, it was not until the 1980s that precision glass molding made mass production possible, allowing larger apertures (F/1.4 or even F/1.0) without sacrificing clarity.27
In the 1970s, the need for flexible viewing angles led to the birth of zoom lenses. However, traditional zoom lenses often lose focus during focal length changes. To ensure clarity, the industry developed "Back-focus Adjustment" mechanisms to keep the focus locked on the sensor plane from wide to telephoto ends.29 Modern motorized zoom lenses incorporate precision stepper motors to automatically adjust the field of view based on alarm triggers.26
As sensor resolution jumped from 0.3 MP to 8 MP (4K), the flaws of traditional auto-iris lenses emerged. A conventional DC-iris adjusts only its opening size in response to brightness: in bright environments the iris closes so tightly that severe diffraction blurs the image, a phenomenon known as the "optical limit".30
To counter this, Axis Communications introduced P-iris (Precise Iris) technology. Rather than relying solely on a light sensor, the camera's software communicates directly with a stepper motor in the lens (a control-loop sketch follows this list):
- Optimal Aperture Selection: The software identifies the lens's "sweet spot" (usually a mid-range F-stop) and maintains it as much as possible.30
- Gain and Exposure Linkage: When light is too strong, the system prioritizes a shorter exposure or reduced electronic gain rather than closing the iris excessively, thus avoiding diffraction.30
- Maximized Depth of Field: For scenes like long corridors, P-iris optimizes the depth of field so that both foreground and background remain sharp.33
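The sketch below is a hypothetical rendering of that priority order, not Axis firmware; every name, threshold, and step size is an assumption made for illustration.

```python
SWEET_SPOT_F = 5.6               # mid-range F-stop where the lens resolves best
DIFFRACTION_WALL_F = 11.0        # never stop down past this

def adjust_exposure(brightness: float, target: float,
                    f_stop: float, shutter_s: float, gain_db: float):
    """One control step: shed gain, then shutter, and only then close the
    iris; when reopening, head back toward the sweet spot first."""
    if brightness > target:                      # scene too bright
        if gain_db > 0.0:
            gain_db -= 1.0                       # 1) reduce electronic gain
        elif shutter_s > 1 / 8000:
            shutter_s /= 2                       # 2) shorten the exposure
        elif f_stop < DIFFRACTION_WALL_F:
            f_stop += 1.0                        # 3) last resort: stop down
    elif brightness < target:                    # scene too dark
        if f_stop > SWEET_SPOT_F:
            f_stop -= 1.0                        # reopen toward the sweet spot
        elif shutter_s < 1 / 30:
            shutter_s *= 2
        else:
            gain_db += 1.0
    return f_stop, shutter_s, gain_db

print(adjust_exposure(brightness=0.9, target=0.5,
                      f_stop=5.6, shutter_s=1 / 250, gain_db=6.0))
```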
Raw data from the sensor must be processed by an Image Signal Processor (ISP) to be viewable. The evolution of the ISP is what turned security monitoring from "seeing" to "seeing clearly and accurately."
In backlit scenes (like a bank window), the contrast between bright and dark areas can exceed 100,000:1. ISPs handle this through three main methods:
- Digital WDR (DWDR): A software algorithm that adjusts gamma curves to brighten dark areas. Low cost, but noisy.35
- True WDR (Multi-exposure Fusion): The mainstream high-end solution. The ISP instructs the sensor to take two frames in rapid succession, one short exposure (for highlights) and one long exposure (for shadows), then fuses them seamlessly with pixel-level registration; a minimal fusion sketch follows this list.36
- Forensic WDR: A variant optimized to reduce motion artifacts, ensuring that moving objects do not "ghost," which is critical for license plate recognition.25
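As a rough illustration of two-frame fusion, the numpy sketch below blends a long and a short exposure with a luminance-driven weight. The weighting scheme and 16:1 exposure ratio are assumptions for clarity; production ISPs add motion-aware registration on top.

```python
import numpy as np

def wdr_fuse(short_exp, long_exp, exposure_ratio=16.0):
    """Trust the long exposure in shadows and the short exposure in
    highlights, blending smoothly in between."""
    w = np.clip(long_exp / 255.0, 0.0, 1.0)    # ~1 where the long frame clips
    short_scaled = short_exp * exposure_ratio  # match radiometric scales
    fused = (1.0 - w) * long_exp + w * short_scaled
    return fused / fused.max()                 # squash to a displayable range

scene = np.random.rand(4, 4) * 2000            # high-dynamic-range radiance
long_exp = np.clip(scene, 0, 255)              # shadows good, highlights clip
short_exp = np.clip(scene / 16.0, 0, 255)      # highlights good, shadows dim
print(wdr_fuse(short_exp, long_exp))
```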
The Signal-to-Noise Ratio (SNR) these ISP algorithms contend with can be described by:

$$\mathrm{SNR} = \frac{P \cdot QE \cdot t}{\sqrt{P \cdot QE \cdot t + D \cdot t + N_r^2}}$$

where $P$ is the incident photon flux, $QE$ the quantum efficiency, $t$ the integration time, $D$ the dark current, and $N_r$ the read noise in electrons.
The final frontier for security is darkness. Traditional IR night vision results in the loss of color, making it impossible to identify clothing or vehicle colors.40
Starlight-grade performance relies on pushing physical limits:
- Large-Format Sensors: Using 1/1.8-inch or even 1/1.2-inch sensors increases the light-receiving area per pixel, capturing more photons.39
- Ultra-Large Aperture Optics: F/1.0 or F/0.95 lenses provide roughly 4x the light intake of a standard F/2.0 lens.26
- Slow Shutter Algorithms: Stacking frames in the ISP to lengthen the effective integration time. This introduces some motion blur but produces day-like color images in 0.001 lux environments; a frame-stacking sketch follows this list.24
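The sketch below shows why stacking works: averaging N frames of a static scene leaves the signal unchanged while random noise shrinks by roughly the square root of N. The 4x4 scene and Poisson photon model are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((4, 4), 10.0)          # faint, static scene (mean photons/frame)
frames = rng.poisson(truth, size=(16, 4, 4)).astype(float)  # 16 noisy frames

stacked = frames.mean(axis=0)          # temporal integration in the ISP

print(f"single frame SNR ~ {truth.mean() / frames[0].std():.1f}")
print(f"16-frame stack SNR ~ {truth.mean() / stacked.std():.1f}")  # ~4x higher
```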
When light drops below 0.0001 lux, gain alone is insufficient. Manufacturers such as Hikvision (DarkFighter X) and Keda launched "Blacklight" technology, which mimics the division of labor between the human eye's rods and cones:
- Optical Splitting: A specialized prism splits incoming light into infrared and visible paths.44
- Dual Sensors: One sensor captures IR (luminance and detail), while the other captures the weak visible light (color).
- Pixel-Level Fusion: The ISP fuses the two paths in real time, outputting bright, full-color, low-noise video; this demands sub-pixel calibration accuracy (a toy fusion sketch follows this list).44
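Purely as a toy stand-in for such a pipeline, the sketch below takes luminance from the IR path and color ratios from the dim visible path; the ratio-based chroma transfer and pre-registered inputs are simplifying assumptions, not a vendor algorithm.

```python
import numpy as np

def blacklight_fuse(ir_luma, visible_rgb, eps=1e-6):
    """Relight the visible frame's color with the clean IR luminance,
    assuming both sensors are already registered to sub-pixel accuracy."""
    vis_luma = visible_rgb.mean(axis=-1, keepdims=True)  # crude Y estimate
    chroma = visible_rgb / (vis_luma + eps)              # keep only color ratios
    return np.clip(chroma * ir_luma[..., None], 0.0, 1.0)

ir_luma = np.random.rand(4, 4)                  # bright, low-noise IR frame
visible_rgb = np.random.rand(4, 4, 3) * 0.05    # very dark but colored frame
print(blacklight_fuse(ir_luma, visible_rgb).shape)  # (4, 4, 3) full-color output
```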
Modern monitoring is moving beyond a single perspective toward multi-sensor fusion platforms.
To cover expansive areas like squares or airports, Hikvision's PanoVu series integrates 4 to 8 sensors whose outputs the ISP stitches into one "seamless" panorama. This involves:
- Exposure Consistency: Ensuring brightness is uniform across all sensors; a toy gain-matching sketch follows this list.45
- Pixel Registration: Eliminating blind spots and ghosting at the seams.45
- Multi-Directional Monitoring: A single IP address and a single cable manage a 360-degree view, reducing system costs.47
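As a toy illustration of exposure consistency alone, the sketch below scales one sensor's frame so its overlap region matches its neighbor's mean brightness before the seam is blended; the overlap width and mean-matching rule are assumptions, far simpler than a production stitcher.

```python
import numpy as np

def match_exposure(left, right, overlap=32):
    """Scale `right` so its overlap strip matches `left`'s mean brightness."""
    gain = left[:, -overlap:].mean() / max(right[:, :overlap].mean(), 1e-6)
    return right * gain

left = np.random.rand(480, 640) * 0.8     # slightly darker sensor
right = np.random.rand(480, 640)          # slightly brighter neighbor
right_matched = match_exposure(left, right)
print(left[:, -32:].mean(), right_matched[:, :32].mean())  # now equal
```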
Computational imaging is blurring the line between hardware and software.
- Smart Hybrid Light: Cameras like Hikvision's Smart Hybrid Light use AI to switch from discreet IR mode to white-light color mode when a person or vehicle is detected.41
- Multi-Spectral Fusion: Fusing thermal (LWIR) and visible light. Thermal detects heat (revealing hidden targets), while visible light identifies them, greatly improving perimeter protection accuracy.51
Looking ahead to 2030, the form of security cameras will undergo another qualitative change.
Research suggests that "lensless cameras" based on computational optics are maturing: thin optical encoders replace glass lenses, letting cameras become as thin as stickers.20 Furthermore, Single-Photon Avalanche Diodes (SPADs) will allow imaging in near-zero-light, photon-counting conditions.20
By 2030, cameras will not just be visual tools:
- Biometric Monitoring: Using long-range laser Doppler vibrometers to capture heartbeats and breathing.55
- Emotion Analytics: Deep neural networks will parse micro-expressions and body language to attempt "intent prediction" before a crime occurs.55
- Edge Autonomy: With 5G/6G and low-power AI chips, cameras will act as "digital guards," performing all analysis locally and uploading encrypted data via quantum protocols.3
The evolution of security cameras is a history of humanity's endless pursuit of "visibility." From a 1942 bunker machine to today's AI-powered terminal with pixel-level fusion and color night vision, every step has been a triumph over physical limits. Lenses moved from spherical to aspherical and irises from manual to P-iris; sensors moved from bulky tubes to BSI CMOS and toward quantum sensing; PCB technology moved from simple connections to high-performance SoC platforms.
The future of security will be not a collection of cold hardware but a fusion of physics, semiconductors, and AI. As these systems guard society, the true challenge of the next decade will be striking the balance between technological progress and the ethics of privacy.