The world's first dedicated AI processor for dynamic vision breaks the limitations of traditional static vision processing

tenco 2019-04-28

aiCTX, a Swiss neuromorphic chip company, has unveiled DynapCNN, the world's first purely event-driven dynamic vision AI processor, designed for always-on, real-time processing of dynamic visual information at ultra-low power. DynapCNN breaks free of the frame-based limitations of traditional static vision processing and is tailored to a new class of dynamic vision applications that demand ultra-low latency and ultra-low power consumption. Its launch opens a new era of dynamic vision processing based on event-driven computing at the pixel level.

DynapCNN is a fully asynchronous, highly configurable, and scalable neuromorphic processor. The chip occupies 12 mm² in a GlobalFoundries 22 nm process and integrates more than one million spiking neurons and four million programmable parameters on a single die, supporting a variety of CNN architectures. The scalability of the chip architecture makes it well suited to implementing large-scale spiking convolutional neural networks. DynapCNN is the world's first dedicated AI chip to combine the high performance of machine learning with the energy efficiency of event-driven neuromorphic computing. As a new generation of dynamic vision processor, DynapCNN's dynamic vision solution not only improves computing energy efficiency by 100-1000x, but also cuts the recognition response latency of existing real-time AI vision solutions by more than 10x.
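To make the quoted capacity figures concrete, the sketch below, which is purely illustrative and not the official aiCTX tool flow, tallies the weights and neurons of a small spiking CNN and checks them against the roughly one million neurons and four million programmable parameters cited above; the layer sizes and helper functions are assumptions.

```python
# Illustrative sketch: check whether a small CNN fits within DynapCNN's quoted
# budget of ~1M spiking neurons and ~4M programmable parameters.
# The layer list and the budgets below are assumptions for illustration only.

NEURON_BUDGET = 1_000_000
PARAM_BUDGET = 4_000_000

# (out_channels, in_channels, kernel_h, kernel_w, out_h, out_w) per conv layer
layers = [
    (16, 2, 3, 3, 64, 64),    # hypothetical first layer fed by a 2-polarity event camera
    (32, 16, 3, 3, 32, 32),
    (64, 32, 3, 3, 16, 16),
]

def conv_params(out_c, in_c, kh, kw, *_):
    """Weights of one convolutional layer (biases ignored for simplicity)."""
    return out_c * in_c * kh * kw

def conv_neurons(out_c, _in_c, _kh, _kw, out_h, out_w):
    """One spiking neuron per output feature-map element."""
    return out_c * out_h * out_w

total_params = sum(conv_params(*layer) for layer in layers)
total_neurons = sum(conv_neurons(*layer) for layer in layers)

print(f"parameters: {total_params:,} / {PARAM_BUDGET:,}")
print(f"neurons:    {total_neurons:,} / {NEURON_BUDGET:,}")
assert total_params <= PARAM_BUDGET and total_neurons <= NEURON_BUDGET
```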


Built entirely from asynchronous digital circuits, DynapCNN is the world's first fully event-based AI processor. As the demand for ultra-low-power, local, real-time intelligence in IoT and mobile devices keeps growing, DynapCNN, which is suited to edge computing and needs no cloud connection, will find increasingly broad application. Its high flexibility and reconfigurability open up a wide range of AI models for development and deployment, while its event-triggered computing mechanism, driven by dynamic vision, allows the chip to operate at sub-milliwatt power. The DynapCNN processor integrates a dedicated interface circuit that connects directly to most dynamic vision cameras for tasks such as face recognition, gesture recognition, high-speed object tracking and classification, and behavior recognition.
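Conceptually, the event stream such a dynamic vision camera delivers over this interface can be pictured as a sequence of (x, y, timestamp, polarity) tuples in address-event representation. The minimal sketch below, with a hypothetical Event type and callback, shows how a consumer reacts only when events arrive instead of polling frames; it does not reflect the actual DynapCNN interface or SDK.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Event:
    """One address-event from a dynamic vision sensor (assumed format)."""
    x: int            # pixel column
    y: int            # pixel row
    timestamp_us: int # microsecond timestamp
    polarity: int     # +1 brightness increase, -1 brightness decrease

def consume(events: Iterable[Event], on_event: Callable[[Event], None]) -> None:
    """Push each event to the processing stage as it arrives; no frames, no polling."""
    for ev in events:
        on_event(ev)

# Example: two pixels change, so exactly two events are processed.
# A fully static scene would emit no events and trigger no computation at all.
stream = [Event(12, 40, 1_000, +1), Event(13, 40, 1_250, -1)]
consume(stream, lambda ev: print(f"pixel ({ev.x},{ev.y}) changed at t={ev.timestamp_us}us"))
```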

DynapCNN's ultra-low-power intelligent vision processing stems from its event-driven design and the chip-level integration of several proprietary aiCTX IP blocks. "Almost all current real-time vision applications revolve around recognizing moving objects, such as gesture recognition, face recognition, and tracking and locating moving targets. A traditional image-processing system, however, handles video frame by frame: even if the object in front of the camera never changes, the processor still computes over every frame. DynapCNN is event-driven, so if the object in front of the sensor does not change, our system can cut the power consumed by real-time vision processing to almost zero. The chip also uses sparse computation to process object motion in the scene, further reducing its dynamic power consumption," explained aiCTX CEO Dr. Ning Qiao. "This energy efficiency means the visual AI can stay on all the time, performing local real-time processing on the terminal device at milliwatt-level power, which is something traditional deep-learning vision accelerator chips cannot achieve," he added.
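To illustrate why a largely static scene costs almost nothing under this event-driven model, the toy comparison below uses made-up numbers, not measured chip behavior, to count the pixel operations a frame-based pipeline performs versus an event-driven one on the same mostly static input.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, FRAMES = 128, 128, 100

# A mostly static scene: only a handful of pixels change between frames.
frames = np.zeros((FRAMES, H, W), dtype=np.uint8)
for t in range(1, FRAMES):
    frames[t] = frames[t - 1]
    idx = rng.integers(0, H * W, size=16)        # a few moving-object pixels
    frames[t].flat[idx] = rng.integers(0, 255, 16)

# Frame-based pipeline: touches every pixel of every frame.
frame_ops = FRAMES * H * W

# Event-driven pipeline: touches only the pixels that actually changed.
event_ops = int(np.sum(frames[1:] != frames[:-1]))

print(f"frame-based ops:  {frame_ops:,}")
print(f"event-driven ops: {event_ops:,}")
print(f"reduction: ~{frame_ops / max(event_ops, 1):,.0f}x fewer operations")
```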

Built on a purely asynchronous, event-driven digital circuit design, the DynapCNN processor uses no high-speed clock; computation is triggered by changes in the visual scene, and every pixel change produced by a moving object is processed in real time. By breaking free of frame-based vision processing, DynapCNN computes continuously on the pixel-level dynamic data stream and achieves ultra-low latency, below 5 ms, for real-time recognition of moving objects. Compared with existing deep-learning real-time vision solutions, the ultra-low-latency dynamic vision solution DynapCNN provides cuts recognition response latency by more than 10x, making it well suited to high-speed vision scenarios such as vehicles and high-speed aircraft.
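The latency argument can be made concrete with a back-of-the-envelope comparison using assumed, not vendor-measured, numbers: a frame-based pipeline cannot respond faster than a fraction of its frame period plus its per-frame compute time, whereas an event-driven pipeline responds as soon as the triggering pixels arrive.

```python
# Back-of-the-envelope latency comparison (illustrative assumptions only).

FRAME_RATE_HZ = 30        # assumed conventional camera frame rate
FRAME_COMPUTE_MS = 20     # assumed per-frame inference time on an accelerator
EVENT_COMPUTE_MS = 1      # assumed time to propagate a burst of events

# Frame-based: on average the motion happens mid-frame, so the result lags
# by about half a frame period plus the per-frame compute time.
frame_latency_ms = 0.5 * (1000 / FRAME_RATE_HZ) + FRAME_COMPUTE_MS

# Event-driven: the pixels that changed are processed as they arrive.
event_latency_ms = EVENT_COMPUTE_MS

print(f"frame-based latency:  ~{frame_latency_ms:.1f} ms")
print(f"event-driven latency: ~{event_latency_ms:.1f} ms")
print(f"speed-up: ~{frame_latency_ms / event_latency_ms:.0f}x")
```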

"Using local AI processors to power iot devices avoids the power and cost associated with uploading large amounts of sensor data to the cloud."Dr Sadique Sheik, senior research and development engineer at aiCTX, explained to the reporter, "DynapCNN supports ultra-low power edge computing for many scenarios of iot sensor vision, which is an efficient and economical solution and a protection of user privacy."Right now, almost all visual processing is done in the cloud.The aiCTX chip can do all the processing locally without sending video from the device to the cloud, which is a powerful move to provide end users with privacy and data protection.

DynapCNN's development kit will be available in the third quarter of 2019. More technical details and demonstrations of DynapCNN are coming soon.

aiCTX series chips

aiCTX's chips break through the limitations of the traditional von Neumann architecture, improving computational capability by emulating the way neurons in the human brain operate and by adopting a fully parallel computing architecture. Compared with conventional chips, event-triggered operation based on purely asynchronous circuits and an efficient network structure give the aiCTX chips ultra-low energy consumption and ultra-low latency, making them well suited to processing dynamic information.
