Why Camera Interfaces Are Critical to ADAS System Design

The car is rapidly evolving into a securely connected, self-driving robot with the capacity to sense its environment, reason about it, and then act autonomously. And perhaps changing even more quickly are small autonomous public vehicles: taxis, ride shares, and buses that will take us where we want to go from mass-transit stops (the "last mile") or within a downtown or office-campus area.

One example is the NAVYA ARMA autonomous electric bus that was launched back in October of 2015. The small bus can safely shuttle up to 15 passengers at up to 28 mph with no driver. It’s being tested or used in a number of communities in Europe and the U.S. and was demonstrated on the streets of Las Vegas during CES 2017.

Seeing the Surroundings

Be it a car or a bus, autonomous vehicles need cameras, radar, and possibly lidar to sense the full environment around them. Using a combination of these sensors, the vehicle's advanced driver-assistance system (ADAS) can detect the world on all four sides of the vehicle. Multiple video cameras—as few as five, but often as many as eight—are key to the system. The front- and rear-mounted cameras need the sensitivity and quick response to aid cross-traffic and collision detection, and they are fast becoming standard equipment on many cars and SUVs. Together, the surround-view cameras provide reliable information for emergency brake assist, adaptive cruise control, blind-spot detection, rear cross-traffic alert, lane-departure warning/automatic lane keeping, and—coming soon to your favorite car—traffic-sign recognition, so you will never, ever exceed the posted speed limit.

Essential Camera Systems

For example, the latest hardware suite on Tesla automobiles uses an NVIDIA Drive PX 2 processing platform that grabs data from eight cameras, ultrasonic sensors, and a radar system. The platform scales from a palm-sized, energy-efficient module for AutoCruise capabilities to a powerful AI supercomputer capable of full autonomous driving. The system can understand in real time what's happening around the vehicle, precisely locate itself on an HD map, and plan a safe path forward. It combines deep learning, sensor fusion, and surround vision to change the driving experience.

The performance of the camera system is critical to safety-assisted or autonomous vehicles. The cameras are, of course, spread out around the vehicle and are often quite a distance from the CPU. Their resolution, dynamic range, and frame rate determine how far away the ADAS can see an object, how small an object it can detect, and how quickly the information is available. Given the critical nature of the information from these devices, they cannot tolerate high error rates. They also have very high data rates. In surround-view systems, each camera typically delivers a video stream with 1280 × 800-pixel resolution and a frame refresh rate of 30 frames/s.
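As a rough sanity check of those numbers, here is a back-of-the-envelope calculation of one camera's raw bandwidth. It assumes uncompressed 12-bit pixels (the word width the GMSL parts discussed later carry) and ignores horizontal/vertical blanking overhead, so real link rates run somewhat higher:

```python
# Back-of-the-envelope per-camera bandwidth for a surround-view stream.
# Assumes raw 12-bit pixels, no compression, no blanking overhead.
WIDTH, HEIGHT = 1280, 800      # pixels per frame
FPS = 30                       # frame refresh rate
BITS_PER_PIXEL = 12

bits_per_second = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
mbps = bits_per_second / 1e6
print(f"Raw per-camera rate: {mbps:.2f} Mb/s")  # 368.64 Mb/s
```

Multiply that by five to eight cameras and it's clear why the choice of link matters.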

The automotive world uses a number of buses and networks, including CAN, LIN, FlexRay, MOST, LVDS, and Ethernet. But the data rates required by the video links preclude the use of all of these except, perhaps, LVDS and Ethernet.
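A quick comparison makes the point. The peak rates below are nominal, commonly cited figures for each bus (ballpark values, not exact spec limits), checked against one raw 1280 × 800, 30-frame/s, 12-bit camera stream:

```python
# Nominal peak data rates for common automotive buses (ballpark figures,
# not exact specification limits).
BUS_RATES_MBPS = {
    "LIN": 0.02,                    # ~19.2 kb/s
    "CAN (classic)": 1.0,
    "FlexRay": 10.0,
    "MOST150": 150.0,
    "100BASE-T1 Ethernet": 100.0,
    "1000BASE-T1 Ethernet": 1000.0,
    "LVDS-based SerDes": 1740.0,    # e.g., a 1.74-Gb/s GMSL link
}

# One raw surround-view camera stream (12-bit pixels, no blanking):
camera_mbps = 1280 * 800 * 30 * 12 / 1e6   # ~368.6 Mb/s

for bus, rate in BUS_RATES_MBPS.items():
    verdict = "OK" if rate >= camera_mbps else "too slow"
    print(f"{bus:24s} {rate:8.1f} Mb/s  {verdict}")
```

Only gigabit-class Ethernet and LVDS-style serial links clear the bar, which matches the article's conclusion.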

A Better Solution

A better solution can be found in Gigabit Multimedia Serial Link (GMSL), which provides a compression-free alternative to Ethernet. Compared to Ethernet, GMSL delivers 10x faster data rates, 50% lower cabling costs, and better EMC. Maxim offers the MAX96707 and MAX96708 GMSL serializer/deserializer chips, which use current-mode logic (CML) for very high noise immunity and can send data over low-cost 50-Ω coax or 100-Ω twisted-pair cable for up to 15 m. They work with megapixel cameras at up to a 1.74-Gb/s serial bit rate. Camera data clocks at 12.5 MHz to 87 MHz with 12-bit + H/V data, or at 36.66 MHz to 116 MHz with 12-bit + H/V data using internal encoding. The ICs communicate with each other and with external controllers over a 9.6-kb/s to 1-Mb/s I2C control channel for setup and updates, and they automatically retransmit control data upon error detection. The control channel is multiplexed onto the serial link and is available with or without the video channel.
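As a sketch of the link budget, the payload rate at the maximum pixel clock can be checked against the quoted 1.74-Gb/s serial rate. This assumes a 14-bit word per pixel (12 data bits plus H and V sync, per the clocking figures above); GMSL's internal line encoding adds further overhead that this simple model ignores:

```python
# Simple link-budget check: does the payload at the maximum pixel clock
# fit within the 1.74-Gb/s GMSL serial rate?
# Assumption: 14-bit payload per pixel word (12-bit data + H + V sync);
# the actual GMSL line coding adds overhead beyond this model.
MAX_PIXEL_CLOCK_HZ = 116e6     # upper pixel-clock figure quoted above
PAYLOAD_BITS = 14              # 12-bit pixel + H + V
SERIAL_RATE_GBPS = 1.74        # quoted maximum serial bit rate

payload_gbps = MAX_PIXEL_CLOCK_HZ * PAYLOAD_BITS / 1e9
print(f"Payload at max pixel clock: {payload_gbps:.3f} Gb/s")  # 1.624 Gb/s
assert payload_gbps <= SERIAL_RATE_GBPS   # fits, with headroom for coding
```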

The MAX96707 serializer IC has programmable pre-/de-emphasis for driving longer cables. It offers error detection of video and control data and features a crosspoint switch for dual-camera selection. Programmable spread spectrum is available on its serial output. The chip comes in a small 24-pin, 4 × 4-mm TQFN package and uses a 1.7- to 1.9-V supply. Maximum supply current is 88 mA.

Fig. 1. MAX96707 functional block diagram.

The MAX96708 deserializer can track data from a spread-spectrum serial input, and the chip's adaptive equalization greatly improves error rates. An output crosspoint switch adds flexibility. The IC's core supply range is 1.7 to 1.9 V, while the I/O supply is 1.7 to 3.6 V. The device comes in a 32-pin, 5 × 5-mm TQFN package.

Fig. 2. MAX96708COAXEVKIT# development kit.

Both chips operate over a −40 to +115°C temperature range and have ±8-kV contact and ±15-kV air ESD protection, meeting the IEC 61000-4-2 and ISO 10605 standards. Both are qualified to automotive standard AEC-Q100. An evaluation kit (Fig. 2) is available from distributors.

Certainly, if I were designing an autonomous driving system, one key requirement would be solid communications with the cameras. I would carefully check error rates on all camera links in the actual vehicle setup and under worst-case noise conditions. GMSL technology seems to me to offer the best chance of success in this critical area, while meeting the relevant standards and delivering high reliability.