Building an Embedded Vision System

Traditional embedded vision systems are implemented using FPGA/processor combinations, and are increasingly being implemented using systems-on-chip (SoCs) that combine high-performance processors with FPGA fabric. In this article we introduce the key elements of an embedded vision system, show how software APIs and IP libraries allow such systems to be built quickly and easily, and explain how the value-adding algorithm development fits into the image processing chain.

Embedded vision systems are already in use in a wide range of applications, from simple monitoring systems such as surveillance cameras to more advanced applications such as Advanced Driver Assistance Systems (ADAS) and machine vision in advanced production facilities. Regardless of the application, embedded vision systems share a common structure and can generally be divided into the following three categories:

Device Interface — Provides the interface to the selected imaging device. It supplies the required clock, offset, and configuration data for the chosen device, receives image data from the device, and decodes and formats that data as needed for further processing by the image processing chain.

Image Processing Chain — Receives image data from the device interface and performs operations such as color filter array interpolation and color space conversion (e.g., conversion from color to grayscale). It is also within the image processing chain that the main algorithms are applied to the received images. These can be simple algorithms such as noise reduction or edge enhancement, or much more complex algorithms such as object recognition or optical flow. The algorithm implementation is usually invoked in the upstream portion of the image processing chain, and its complexity depends on the application being implemented. The output formatting portion, which converts the processed image data into the correct format for output to a display or transmission over a communication interface, is referred to as the downstream portion.

System Monitoring and Control — A category that is independent of the device interface and the image processing chain, providing monitoring and control at two levels. The first is internal to the device and provides:

Configuration of the image processing chain
Image analysis functions
Updating of the image processing chain as required during algorithm execution

The second is the control and management of the broader embedded vision system, which provides:

Power management and imaging device power sequencing
Self-test and other system management functions
Network or point-to-point communication support
Configuration of the imaging device via I2C or SPI before the first imaging operation

Some applications allow system monitoring functions to access the frame memory and execute algorithms on the frames therein. In this case the system monitoring can form part of the image processing chain.
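As a concrete illustration, here is a minimal C sketch of the kind of frame-level analysis a monitoring processor might run on a frame held in shared memory. The frame dimensions and the mean-brightness metric are purely illustrative assumptions, not part of any particular design.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical frame geometry; a real design would take these from the
 * configuration of the image processing chain. */
#define FRAME_WIDTH   1280u
#define FRAME_HEIGHT  720u

/* Compute the mean brightness of an 8-bit grayscale frame held in a
 * buffer shared with the image processing chain (e.g., DDR frame memory). */
static uint8_t frame_mean_brightness(const uint8_t *frame)
{
    uint64_t sum = 0;

    for (size_t i = 0; i < (size_t)FRAME_WIDTH * FRAME_HEIGHT; i++)
        sum += frame[i];

    return (uint8_t)(sum / ((uint64_t)FRAME_WIDTH * FRAME_HEIGHT));
}

/* A monitoring loop could use such statistics to retune the image
 * processing chain, for example adjusting sensor exposure when the
 * scene is too dark or too bright. */
```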

These three categories call for different implementation approaches because each presents different challenges. The device interface and the image processing chain both have to handle high-bandwidth data, first to implement the image processing chain itself and then to transfer image data out of the system. System monitoring and control must be able to process and respond to commands received over the communication interface and to support external communications. If system monitoring also forms part of the image processing chain, a high-performance processor is needed.

As a result, embedded vision systems have traditionally been implemented using FPGA/processor combinations, and are increasingly implemented using systems-on-chip that combine high-performance processors with FPGA fabric. Before we demonstrate how these aspects can be combined, let's take a closer look at the challenges of each of the three categories.

Device interface

The sensor interface is determined by the device chosen for the application, and most embedded vision applications use a CMOS image sensor (CIS). In general, these sensors either use a parallel output bus with control signals that indicate valid lines and frames, or use higher-rate serial communication to achieve a simpler system interface, although the latter results in a slightly more complex FPGA implementation. Compared to parallel buses, these serial data streams can transmit images over a smaller number of channels because they operate at much higher data rates, allowing the imager to support higher frame rates than a parallel interface. For synchronization, it is common practice to pair the data channels carrying image and other data words with a synchronization channel carrying code words that define the content on the data channels. Because the interface is source-synchronous, a clock channel accompanies the data and synchronization channels. These high-speed serial channels are typically implemented as LVDS or reduced-swing LVDS to reduce system noise and power consumption.

Regardless of the output image format, the CIS device usually needs to be configured by the embedded vision system before any images can be acquired. This is a consequence of the versatility of CIS devices: the flexibility that provides powerful on-chip processing also means the sensor must be loaded with the correct settings before it can output images. The bandwidth requirements of this configuration interface are far lower than those of the image transfer interface, so the I2C or SPI interface standards are often used.

Because of the high bandwidth required for image data, this interface is usually implemented in the FPGA, which also makes it easier to integrate with the image processing chain. The configuration interface of the CIS device generally uses I2C or SPI and can be implemented either in the FPGA or on a system monitoring and control processor that supports such an interface.
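As an illustration of this configuration step, the following minimal C sketch assumes a Linux-capable control processor exposing the sensor's I2C bus through i2c-dev; the sensor address, register address, and register layout are hypothetical placeholders that would come from the chosen CIS device's datasheet.

```c
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

/* Hypothetical 7-bit I2C address and register for the image sensor;
 * real values come from the CIS device's datasheet. */
#define SENSOR_I2C_ADDR   0x36
#define REG_EXPOSURE      0x3500

/* Write an 8-bit value to a sensor register, assuming the common CIS
 * convention of 16-bit register addresses followed by the data byte. */
static int sensor_write_reg(int fd, uint16_t reg, uint8_t value)
{
    uint8_t buf[3] = { (uint8_t)(reg >> 8), (uint8_t)(reg & 0xFF), value };
    return (write(fd, buf, sizeof(buf)) == (ssize_t)sizeof(buf)) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);            /* I2C bus exposed by the SoC */
    if (fd < 0 || ioctl(fd, I2C_SLAVE, SENSOR_I2C_ADDR) < 0) {
        perror("i2c setup");
        return 1;
    }

    /* Example: load an exposure setting before the first capture. */
    if (sensor_write_reg(fd, REG_EXPOSURE, 0x20) != 0)
        perror("sensor write");

    close(fd);
    return 0;
}
```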

Image processing chain

The image processing chain consists of upstream and downstream stages and their interfaces, with pixel data arriving from the device interface. However, the pixels as received may not be in a format suitable for correct display; image correction is often required, particularly when a color imager is used. To sustain throughput at the required data rates, image processing chains are typically implemented in FPGAs to exploit their parallelism. This allows the processing chain to be built as a pipeline in which every stage operates in parallel, resulting in a higher frame rate. For some applications, however, latency must also be considered, particularly in systems such as Advanced Driver Assistance Systems (ADAS).

To build the image processing chain efficiently, a common interconnect protocol should be used as the basis for the image processing cores, making it straightforward to connect the IP blocks together. This brings two benefits: the cores form a reusable library, and because each IP core is designed to receive and send data according to a defined standard, assembling the pipeline becomes much easier. Several widely used protocols are available, the most common being AXI because of its flexibility in supporting both memory-mapped and streaming interfaces.
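To illustrate the idea of building the chain from cores with a common interface, the following C sketch models two simple stages, a color-to-grayscale conversion and a threshold, chained through a shared "pixels in, pixels out" convention. In a real design each stage would be an FPGA IP core with an AXI4-Stream interface; the frame size and function names here are only illustrative.

```c
#include <stdint.h>
#include <stddef.h>

/* Software model of two chained processing stages. Frame geometry and
 * helper names are purely illustrative. */
#define WIDTH  640u
#define HEIGHT 480u

/* Stage 1: color space conversion, packed 0x00RRGGBB to 8-bit grayscale
 * using an integer approximation of the ITU-R BT.601 luma weights. */
static void rgb_to_gray(const uint32_t *in, uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        uint32_t r = (in[i] >> 16) & 0xFF;
        uint32_t g = (in[i] >> 8)  & 0xFF;
        uint32_t b =  in[i]        & 0xFF;
        out[i] = (uint8_t)((77 * r + 150 * g + 29 * b) >> 8);
    }
}

/* Stage 2: simple binary threshold, e.g., as a front end to object detection. */
static void threshold(const uint8_t *in, uint8_t *out, size_t n, uint8_t t)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (in[i] >= t) ? 255 : 0;
}

/* Because both stages follow the same interface convention, they can be
 * chained in whatever order the application requires. */
void process_frame(const uint32_t *rgb, uint8_t *gray, uint8_t *bin)
{
    rgb_to_gray(rgb, gray, (size_t)WIDTH * HEIGHT);
    threshold(gray, bin, (size_t)WIDTH * HEIGHT, 128);
}
```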

Typical processing stages within the image processing chain include:

Color filter array interpolation
Color space conversion (e.g., color to grayscale)
Noise reduction
Edge enhancement
Higher-level algorithms such as object recognition or optical flow
Output formatting for display or transmission over a communication interface
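As one example of such a stage, here is a minimal C model of a 3x3 box filter for noise reduction. The frame size is an assumed placeholder, and a streaming FPGA implementation would use line buffers rather than addressing the whole frame.

```c
#include <stdint.h>

/* Illustrative 3x3 box filter, one of the simpler noise reduction stages
 * listed above. Border pixels simply average over fewer neighbors. */
#define W 640
#define H 480

static void box_blur_3x3(const uint8_t in[H][W], uint8_t out[H][W])
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            unsigned sum = 0, count = 0;

            /* Sum the pixel and its in-bounds neighbors. */
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < H && xx >= 0 && xx < W) {
                        sum += in[yy][xx];
                        count++;
                    }
                }
            }
            out[y][x] = (uint8_t)(sum / count);
        }
    }
}
```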
